Training on web-scale data can take months. But much of this computation and time is wasted on redundant and noisy points that are already learnt or not learnable. To accelerate training, we introduce Reducible Holdout Loss Selection (RHO-LOSS), a simple but principled technique that approximately selects the training points that most reduce the model's generalization loss. As a result, RHO-LOSS mitigates the weaknesses of existing data selection methods: techniques from the optimization literature typically select "hard" (e.g. high-loss) points, but such points are often noisy (not learnable) or less task-relevant. Conversely, curriculum learning prioritizes "easy" points, but such points need not be trained on once they have been learnt. In contrast, RHO-LOSS selects points that are learnable, worth learning, and not yet learnt. Compared to prior art, RHO-LOSS trains in far fewer steps, improves accuracy, and speeds up training across a wide range of datasets, hyperparameters, and architectures (MLPs, CNNs, and BERT). On the large web-scraped image dataset Clothing-1M, RHO-LOSS trains in 18x fewer steps and reaches 2% higher final accuracy than uniform data shuffling.
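For illustration, a minimal PyTorch-style sketch of the selection rule described above, assuming an auxiliary "irreducible loss" model has already been trained on holdout data; the names (`select_batch`, `irreducible_model`) are hypothetical and this is not the authors' reference implementation:

```python
import torch
import torch.nn.functional as F

def select_batch(model, irreducible_model, xb, yb, keep=32):
    """Rank a large candidate batch by reducible holdout loss and keep the top points.

    reducible loss = per-point training loss under the current model
                     minus the loss of a model trained only on holdout data,
    so noisy points (high holdout loss) and already-learnt points (low training
    loss) are ranked low, while learnable, not-yet-learnt points are ranked high.
    """
    with torch.no_grad():
        train_loss = F.cross_entropy(model(xb), yb, reduction="none")
        holdout_loss = F.cross_entropy(irreducible_model(xb), yb, reduction="none")
    rho_loss = train_loss - holdout_loss
    idx = torch.topk(rho_loss, keep).indices
    return xb[idx], yb[idx]
```

Each gradient step would then be taken only on the selected subset of a larger candidate batch sampled from the training set.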
When an image classifier outputs a wrong class label, it can be helpful to see what changes in the image would have led to a correct classification. This is what algorithms for generating counterfactual explanations produce. However, there are no easily scalable methods for generating such counterfactuals. We develop a new algorithm that provides counterfactual explanations for large image classifiers at low computational cost. We empirically compare this algorithm against baselines from the literature; our novel algorithm consistently finds counterfactuals that are much closer to the original inputs. At the same time, the realism of these counterfactuals is comparable to the baselines. The code for all experiments is available at https://github.com/benedikthoeltgen/deduce.
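As a generic illustration of what a counterfactual explanation optimizes (this is not the algorithm proposed in the paper), a gradient-based search nudges the input toward the desired class while penalising the distance to the original image:

```python
import torch
import torch.nn.functional as F

def gradient_counterfactual(classifier, x, target_class, dist_weight=0.1, steps=200, lr=0.05):
    """Generic counterfactual search sketch.

    x            -- batched image tensor (the misclassified input)
    target_class -- LongTensor of desired class indices
    The perturbed image is pulled toward the target class by the classifier's
    cross-entropy loss while an L1 term keeps it close to the original input.
    """
    x_cf = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(classifier(x_cf), target_class) \
               + dist_weight * (x_cf - x).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x_cf.detach()
```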
In many scientific disciplines, coarse-grained causal models are used to explain and predict the dynamics of more fine-grained systems. Naturally, such models require suitable macro-variables. Automated procedures for detecting suitable variables would help to exploit the increasingly available high-dimensional observational datasets. This work introduces a novel algorithmic approach inspired by a new characterization of causal macro-variables as information bottlenecks between micro-states. Its general form can be adapted to the individual needs of different scientific goals. After an additional transformation step, the causal relationships between the learned variables can be investigated with additive noise models. Experiments on both simulated data and a real climate dataset are reported. On the synthetic dataset, the algorithm robustly detects the ground-truth variables and correctly infers the causal relationships between them. On the real climate dataset, the algorithm robustly detects two variables that correspond to the two known variations of the El Niño phenomenon.
Machine learning models are typically evaluated by computing similarity with reference annotations and trained by maximizing similarity with such. Especially in the bio-medical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect the annotation entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating to better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies to evaluate and improve model performance are reviewed.
Accurate PhotoVoltaic (PV) power generation forecasting is vital for the efficient operation of Smart Grids. The automated design of such accurate forecasting models for individual PV plants involves two challenges: First, information about the PV mounting configuration (i.e. inclination and azimuth angles) is often missing. Second, for new PV plants, the amount of historical data available to train a forecasting model is limited (cold-start problem). We address these two challenges by proposing a new method for day-ahead PV power generation forecasts called AutoPV. AutoPV is a weighted ensemble of forecasting models that represent different PV mounting configurations. This representation is achieved by pre-training each forecasting model on a separate PV plant and by scaling the model's output with the peak power rating of the corresponding PV plant. To tackle the cold-start problem, we initially weight each forecasting model in the ensemble equally. To tackle the problem of missing information about the PV mounting configuration, we use new data that become available during operation to adapt the ensemble weights to minimize the forecasting error. AutoPV is advantageous as the unknown PV mounting configuration is implicitly reflected in the ensemble weights, and only the PV plant's peak power rating is required to re-scale the ensemble's output. AutoPV also makes it possible to represent PV plants with panels distributed on different roofs with varying alignments, as these mounting configurations can be reflected proportionally in the weighting. Additionally, the required computing memory is decoupled when scaling AutoPV to hundreds of PV plants, which is beneficial in Smart Grids with limited computing capabilities. For a real-world data set with 11 PV plants, the accuracy of AutoPV is comparable to a model trained on two years of data and outperforms an incrementally trained model.
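A minimal NumPy sketch of the ensemble idea described above, assuming each base forecaster outputs power normalized by its source plant's peak power rating; the least-squares weight refit shown here is an illustrative choice, not necessarily the authors' exact update scheme:

```python
import numpy as np

class AutoPVEnsembleSketch:
    """Illustrative sketch of a weighted ensemble over mounting-configuration models."""

    def __init__(self, base_models, peak_power_kw):
        self.base_models = base_models          # pre-trained per-plant forecasters
        self.peak_power_kw = peak_power_kw      # peak power rating of the new plant
        # Cold start: every mounting-configuration model gets the same weight.
        self.weights = np.full(len(base_models), 1.0 / len(base_models))

    def predict(self, x):
        member_preds = np.array([m.predict(x) for m in self.base_models])
        return self.peak_power_kw * self.weights @ member_preds

    def update_weights(self, x_hist, y_hist_kw):
        """Re-fit the ensemble weights on data observed during operation."""
        member_preds = np.array([m.predict(x_hist) for m in self.base_models]).T
        target = y_hist_kw / self.peak_power_kw
        w, *_ = np.linalg.lstsq(member_preds, target, rcond=None)
        w = np.clip(w, 0.0, None)               # keep the combination convex
        self.weights = w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))
```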
Quantifying the perceptual similarity of two images is a long-standing problem in low-level computer vision. The natural image domain commonly relies on supervised learning, e.g., a pre-trained VGG, to obtain a latent representation. However, due to domain shift, pre-trained models from the natural image domain might not apply to other image domains, such as medical imaging. Notably, in medical imaging, evaluating the perceptual similarity is exclusively performed by specialists trained extensively in diverse medical fields. Thus, medical imaging remains devoid of task-specific, objective perceptual measures. This work answers the question: Is it necessary to rely on supervised learning to obtain an effective representation that could measure perceptual similarity, or is self-supervision sufficient? To understand whether recent contrastive self-supervised representation (CSR) may come to the rescue, we start with natural images and systematically evaluate CSR as a metric across numerous contemporary architectures and tasks and compare them with existing methods. We find that in the natural image domain, CSR behaves on par with the supervised one on several perceptual tests as a metric, and in the medical domain, CSR better quantifies perceptual similarity concerning the experts' ratings. We also demonstrate that CSR can significantly improve image quality in two image synthesis tasks. Finally, our extensive results suggest that perceptuality is an emergent property of CSR, which can be adapted to many image domains without requiring annotations.
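Using a contrastive self-supervised representation as a perceptual metric essentially amounts to comparing frozen embeddings. A minimal sketch, assuming `encoder` is any pre-trained CSR backbone (e.g. SimCLR or MoCo) and inputs are preprocessed image batches:

```python
import torch
import torch.nn.functional as F

def perceptual_distance(encoder, img_a, img_b):
    """Perceptual distance from a frozen self-supervised feature extractor.

    A smaller cosine distance between the embeddings of the two images is read
    as higher perceptual similarity.
    """
    with torch.no_grad():
        za = F.normalize(encoder(img_a).flatten(1), dim=1)
        zb = F.normalize(encoder(img_b).flatten(1), dim=1)
    return 1.0 - (za * zb).sum(dim=1)   # per-pair cosine distance
```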
Semantic segmentation from aerial views is a vital task for autonomous drones as they require precise and accurate segmentation to traverse safely and efficiently. Segmenting images from aerial views is especially challenging as they include diverse view-points, extreme scale variation and high scene complexity. To address this problem, we propose an end-to-end multi-class semantic segmentation diffusion model. We introduce recursive denoising which allows predicted error to propagate through the denoising process. In addition, we combine this with a hierarchical multi-scale approach, complementary to the diffusion process. Our method achieves state-of-the-art results on UAVid and on the Vaihingen building segmentation benchmark.
Angluin's L* algorithm learns the minimal (complete) deterministic finite automaton (DFA) of a regular language using membership and equivalence queries. Its probably approximately correct (PAC) version replaces an equivalence query by a sufficiently large set of random membership queries, so that the answer comes with a high level of confidence. It can therefore be applied to any kind of device (also non-regular ones) and can be viewed as an algorithm for synthesizing an automaton that abstracts the behavior of the device from observations. Here we are interested in Angluin's PAC learning algorithm when it is applied to devices obtained from a DFA by introducing some noise. More precisely, we study whether Angluin's algorithm reduces the noise and produces a DFA closer to the original one than the noisy device is. We propose several ways of introducing noise: (1) the noisy device inverts the classification of a word w.r.t. the DFA with a small probability, (2) the noisy device modifies the letters of the word with a small probability before asking for its classification w.r.t. the DFA, and (3) the noisy device combines the classification of a word w.r.t. the DFA with its classification w.r.t. a counter automaton. Our experiments were performed on several hundred DFAs. Bluntly stated, our main contributions show that: (1) Angluin's algorithm behaves well whenever the noisy device is produced by a random process, (2) but poorly with structured noise, and (3) that, almost surely, randomness yields systems whose languages are not recursively enumerable.
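A small sketch of the first two noise models, assuming `dfa_accepts` answers membership queries for the original DFA; the caching in model (1) is an implementation assumption that keeps the noisy device's language fixed across repeated queries:

```python
import random

def make_noisy_device_v1(dfa_accepts, p=0.01, seed=0):
    """Noise model (1): each word's classification w.r.t. the DFA is inverted
    with probability p.  Answers are cached so that repeated membership
    queries for the same word stay consistent."""
    rng = random.Random(seed)
    cache = {}
    def noisy_accepts(word):
        key = tuple(word)
        if key not in cache:
            answer = dfa_accepts(word)
            cache[key] = (not answer) if rng.random() < p else answer
        return cache[key]
    return noisy_accepts

def noisy_membership_v2(dfa_accepts, word, alphabet, p=0.01, rng=random):
    """Noise model (2): each letter of the queried word is independently replaced
    with probability p by a random letter before the unmodified DFA is asked."""
    noisy_word = [rng.choice(alphabet) if rng.random() < p else a for a in word]
    return dfa_accepts(noisy_word)
```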
The availability of property data is one of the major bottlenecks in the development of chemical processes, often requiring time-consuming and expensive experiments or limiting the design space to a small number of known molecules. This bottleneck has long been a motivation for the continued development of predictive property models. For the property prediction of novel molecules, group contribution methods have been groundbreaking. More recently, machine learning has joined the more established property prediction models. However, even with recent successes, integrating physical constraints into machine learning models remains challenging. Physical constraints are vital for many thermodynamic properties, such as the Gibbs-Duhem relation, and introduce an additional layer of complexity into the prediction. Here we introduce SPT-NRTL, a machine learning model that predicts thermodynamically consistent activity coefficients and provides NRTL parameters for easy use in process simulations. The results show that SPT-NRTL achieves higher accuracy than UNIFAC in the prediction of activity coefficients across all functional groups and is able to predict many vapor-liquid equilibria with near-experimental accuracy, as illustrated for exemplary mixtures such as those containing n-hexane. To ease the application of SPT-NRTL, NRTL parameters for 100 000 000 mixtures are calculated with SPT-NRTL and provided online.
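For context, the textbook binary NRTL equations show how parameters of the kind SPT-NRTL outputs would be turned into activity coefficients in a process simulation (a standard formula, not the SPT-NRTL model itself):

```python
import math

def nrtl_binary_activity_coefficients(x1, tau12, tau21, alpha=0.3):
    """Standard binary NRTL model: given the mole fraction x1 of component 1 and
    the NRTL parameters tau12, tau21 and alpha, return (gamma1, gamma2)."""
    x2 = 1.0 - x1
    g12 = math.exp(-alpha * tau12)
    g21 = math.exp(-alpha * tau21)
    ln_gamma1 = x2 ** 2 * (
        tau21 * (g21 / (x1 + x2 * g21)) ** 2
        + tau12 * g12 / (x2 + x1 * g12) ** 2
    )
    ln_gamma2 = x1 ** 2 * (
        tau12 * (g12 / (x2 + x1 * g12)) ** 2
        + tau21 * g21 / (x1 + x2 * g21) ** 2
    )
    return math.exp(ln_gamma1), math.exp(ln_gamma2)

# Example call with arbitrary, illustrative parameter values:
gamma1, gamma2 = nrtl_binary_activity_coefficients(x1=0.4, tau12=1.2, tau21=0.8)
```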
Continual learning (CL, sometimes also called incremental learning) is a flavor of machine learning in which the usual assumption of a stationary data distribution is relaxed or omitted. When naively applying, for example, DNNs to CL problems, changes in the data distribution cause the so-called catastrophic forgetting (CF) effect: an abrupt loss of previously acquired knowledge. Although many significant contributions to enabling CL have been made in recent years, most works address supervised (classification) problems. This article reviews the literature that studies CL in other settings, such as learning with reduced supervision, fully unsupervised learning, and reinforcement learning. Besides proposing a simple schema for classifying CL approaches w.r.t. their level of autonomy and supervision, we discuss the specific challenges associated with each setting and their potential contributions to the field of CL as a whole.